This textbook presents the foundational knowledge and essential toolsets needed to step into artificial intelligence (AI). It is especially suitable for college students, graduate students, instructors, and IT hobbyists with an engineering mindset; that is, it aims to get the job done quickly and neatly, with an adequate understanding of why and how. The book is designed to give readers a big picture of AI and its essential topics in the shortest possible time.
Harvard: Liu, Z. “Leo,” 2025. Artificial Intelligence for Engineers. [online] Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-75953-6.
Vancouver: 1. Liu Z “Leo.” Artificial Intelligence for Engineers [Internet]. Springer Nature Switzerland; 2025. Available from: http://dx.doi.org/10.1007/978-3-031-75953-6.
IEEE: [1] Z. “Leo” Liu, Artificial Intelligence for Engineers. Springer Nature Switzerland, 2025. doi: 10.1007/978-3-031-75953-6.
ACM: [1] Liu, Z. “Leo” 2025. Artificial Intelligence for Engineers. Springer Nature Switzerland.
--------3.5.1 Deep Learning Frameworks
--------3.5.2 TensorFlow
------------Overview of APIs
------------Computational Graph
------------Variables
------------Placeholders
------------Comprehensive Example
--------3.5.3 Keras
------------Installation and Data Preparation
------------Model Establishment with the Sequential API
------------Model Establishment with the Functional API
------------Training and Result Visualization
----3.6 Reinforcement Learning
--------3.6.1 Overview of RL Tools
--------3.6.2 OpenAI Gym
----3.7 Practice: Use, Compare, and Understand TensorFlow and Keras for Problem Solving
--------10.3.1 Basic Bagging
--------10.3.2 Random Forest
----10.4 Boosting
--------10.4.1 AdaBoost
------------Loss Function
------------Update on Model Weights
------------Update on Sample Weights/Distribution
------------Pseudo-Code
--------10.4.2 Gradient Boosting
----10.5 Stacking
----10.6 Practice: Code and Evaluate Ensemble Learning Methods
--------11.3.1 Math Framework of K-Means Algorithm
--------11.3.2 Implementation of K-Means
--------11.3.3 Initialization
--------11.3.4 Selection of K
--------11.3.5 Pros and Cons
--------11.8.1 Overview of Evaluation Metrics
--------11.8.2 Internal Evaluation
------------Silhouette Coefficient
------------Davies-Bouldin Index
------------Dunn Index
--------11.8.3 External Evaluation
------------Rand Index
------------Adjusted Rand Index
------------Normalized Mutual Information (NMI)
------------Fowlkes-Mallows Index
------------Contingency Matrix
----11.9 Practice: Test and Modify Clustering Code for Problem Solving
----12.5 Feature Extraction Method 2: Linear Discriminant Analysis
--------12.5.1 Concept and Main Idea
--------12.5.2 Theoretical Basis
------------Rayleigh Quotient and Generalized Rayleigh Quotient
------------Binary Classification
------------Multiclass Classification
--------12.5.3 Implementation
----12.6 Practice: Develop and Modify Code for PCA and LDA
--------13.4.1 Why Not Use Binary Classification for Anomaly Detection?
--------13.4.2 Modification of Supervised Classification Methods for Anomaly Detection
----16.4 Objective Function and Policy Gradient Theorem
--------16.4.1 Objective Function
--------16.4.2 Policy Gradient Theorem
--------16.4.3 Simple Episodic Monte Carlo Implementation of Policy Gradient: REINFORCE V1
--------16.4.4 Strategies for Improving Policy Gradient Implementation
----16.5 Policy Function
--------16.5.1 Linear Policy Function for Discrete Actions: Formulation 1
--------16.5.2 Linear Policy Function for Discrete Actions: Formulation 2
--------16.5.3 Policy Function for Continuous Actions
----16.6 Common Policy Gradient Algorithms
--------16.6.1 More Objective Function Formulations
--------16.6.2 Simple Stepwise Monte Carlo Implementation of Policy Gradient: REINFORCE V2
--------16.6.3 Actor-Critic
--------16.6.4 Actor-Critic with Baseline
--------16.6.5 More Policy Gradient Algorithms
----16.7 Practice: Understand and Modify Policy Gradient Code for Addressing RL Problems
17 Appendices
----17.1 Overview
----17.2 Mathematics for Machine Learning
--------17.2.1 Statistics
------------Random Variables
------------Probabilities
------------Use of Probability in Machine Learning
------------Probability Distributions
--------17.2.2 Information Theory
--------17.2.3 Array Operations
------------Matrix Operations
------------General Array Operations
------------Array Calculus